
Multiple Kernel Learning


Non-parametric Group Orthogonal Matching Pursuit for Sparse Learning with Multiple Kernels

Neural Information Processing Systems

We consider regularized risk minimization in a large dictionary of Reproducing Kernel Hilbert Spaces (RKHSs) over which the target function has a sparse representation. This setting, commonly referred to as Sparse Multiple Kernel Learning (MKL), may be viewed as the non-parametric extension of group sparsity in linear models.
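The greedy flavor of the approach can be illustrated with a minimal sketch, not the authors' algorithm: at each step, fit a kernel ridge model of the current residual for each candidate kernel, keep the kernel whose fit aligns best with the residual, then refit on the selected kernels. The function name `group_omp_mkl` and the `ridge` parameter are illustrative assumptions, not from the paper.

```python
import numpy as np

def group_omp_mkl(kernels, y, n_select=2, ridge=1e-3):
    """Greedy kernel selection sketch: pick, one at a time, the kernel
    whose kernel-ridge fit best aligns with the current residual, then
    refit on the sum of the selected kernels."""
    n = len(y)
    residual = y.copy()
    selected = []
    for _ in range(n_select):
        best_m, best_gain = None, -np.inf
        for m, K in enumerate(kernels):
            if m in selected:
                continue
            alpha = np.linalg.solve(K + ridge * np.eye(n), residual)
            gain = residual @ (K @ alpha)  # alignment of the fit with the residual
            if gain > best_gain:
                best_m, best_gain = m, gain
        selected.append(best_m)
        # refit on the sum of selected kernels and update the residual
        K_sum = sum(kernels[m] for m in selected)
        alpha = np.linalg.solve(K_sum + ridge * np.eye(n), y)
        residual = y - K_sum @ alpha
    return selected, residual
```

With a target generated from one kernel's feature space, the informative kernel is selected first and the residual shrinks accordingly.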



The Local Rademacher Complexity of ℓp-Norm Multiple Kernel Learning

Neural Information Processing Systems

Previous local approaches analyzed only the case p = 1, while our analysis covers all cases 1 ≤ p ≤ ∞, assuming that the feature mappings corresponding to the different kernels are uncorrelated.


Multiple Operator valued Kernel Learning

Neural Information Processing Systems

Positive definite operator-valued kernels generalize the well-known notion of reproducing kernels, and are naturally adapted to multi-output learning situations. This paper addresses the problem of learning a finite linear combination of infinite-dimensional operator-valued kernels which are suitable for extending functional data analysis methods to nonlinear contexts.
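To make the notion concrete, here is a minimal sketch of one standard instance, a separable operator-valued kernel: a scalar kernel multiplied by a fixed PSD matrix that couples the outputs. This is an illustrative example, not the kernels learned in the paper, and the names are made up.

```python
import numpy as np

def separable_ov_kernel(k_scalar, A):
    """A separable operator-valued kernel: K(x, y) = k(x, y) * A, where
    k is a scalar positive definite kernel and A is a PSD matrix that
    encodes the coupling between the output components."""
    return lambda x, y: k_scalar(x, y) * A
```

For a 2-output problem, `K(x, y)` is then a 2x2 matrix rather than a scalar, which is what "operator-valued" buys in multi-output learning.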


Learning Kernels Using Local Rademacher Complexity

Neural Information Processing Systems

We use the notion of local Rademacher complexity to design new algorithms for learning kernels. Our algorithms thereby benefit from the sharper learning bounds based on that notion which, under certain general conditions, guarantee a faster convergence rate. We devise two new learning kernel algorithms: one based on a convex optimization problem for which we give an efficient solution using existing learning kernel techniques, and another one that can be formulated as a DC-programming problem for which we describe a solution in detail. We also report the results of experiments with both algorithms in both binary and multi-class classification tasks.


Neural Generalization of Multiple Kernel Learning

Ghanizadeh, Ahmad Navid, Ghiasi-Shirazi, Kamaledin, Monsefi, Reza, Qaraei, Mohammadreza

arXiv.org Artificial Intelligence

Multiple Kernel Learning (MKL) is a conventional way to learn the kernel function in kernel-based methods, and MKL algorithms enhance the performance of kernel methods. However, MKL models have lower complexity than deep learning models and are inferior to them in recognition accuracy. Deep learning models can learn complex functions by applying nonlinear transformations to the data through several layers. In this paper, we show that a typical MKL algorithm can be interpreted as a one-layer neural network with linear activation functions. Building on this interpretation, we propose a Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the conventional MKL framework to a multi-layer neural network with nonlinear activation functions. Our experiments on several benchmarks show that the proposed method increases the complexity of MKL models and leads to higher recognition accuracy.
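The one-layer-network reading of MKL can be sketched as follows: each kernel m contributes a unit z_m computed from its feature map, and the output mixes the units with the kernel weights. With no activation this is plain (linear) MKL; inserting a nonlinearity is the kind of generalization the abstract describes. The function and its arguments are illustrative assumptions, not the NGMKL architecture itself.

```python
import numpy as np

def mkl_as_one_layer(phis, betas, w, x, activation=None):
    """View MKL as a one-layer network: unit m outputs
    z_m = <beta_m, phi_m(x)>; the head mixes units with weights w.
    activation=None reproduces linear MKL; a nonlinear activation
    gives an NGMKL-style generalization."""
    z = np.array([beta @ phi(x) for phi, beta in zip(phis, betas)])
    if activation is not None:
        z = activation(z)
    return w @ z
```

For explicit feature maps the linear case reduces to a weighted sum of per-kernel predictors, while a nonlinearity such as `np.tanh` changes the output.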


Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning

Neural Information Processing Systems

In this paper, we give a new generalization error bound for Multiple Kernel Learning (MKL) under a general class of regularizations. Our main target is dense-type regularizations, including ℓp-MKL, which imposes ℓp-mixed-norm regularization instead of ℓ1-mixed-norm regularization. According to recent numerical experiments, sparse regularization does not necessarily perform better than dense-type regularizations. Motivated by this fact, this paper gives a general theoretical tool for deriving fast learning rates that applies to arbitrary monotone norm-type regularizations in a unifying manner. As a by-product of our general result, we show a fast learning rate for ℓp-MKL that is the tightest among existing bounds.


Metric Learning with Multiple Kernels

Neural Information Processing Systems

Metric learning has become a very active research field. Its most popular representative, Mahalanobis metric learning, can be seen as learning a linear transformation and then computing the Euclidean metric in the transformed space. Since a linear transformation might not always be appropriate for a given learning problem, kernelized versions of various metric learning algorithms exist. However, the problem then becomes finding the appropriate kernel function. Multiple kernel learning addresses this limitation by learning a linear combination of a number of predefined kernels; this approach can also readily be used in the context of multiple-source learning to fuse different data sources.
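The metric induced by a learned kernel combination can be sketched directly from kernel evaluations: for K = sum_m w_m K_m, the squared feature-space distance is d^2(x, y) = K(x, x) + K(y, y) - 2 K(x, y). This is a generic identity, not the specific metric learning objective of the paper, and the function name is made up.

```python
import numpy as np

def combined_kernel_distance(kernel_fns, weights, x, y):
    """Squared distance in the feature space induced by a weighted
    combination of kernels, K = sum_m w_m * K_m, computed as
    d^2(x, y) = K(x, x) + K(y, y) - 2 K(x, y)."""
    K = lambda a, b: sum(w * k(a, b) for w, k in zip(weights, kernel_fns))
    return K(x, x) + K(y, y) - 2.0 * K(x, y)
```

With a single linear kernel and unit weight this recovers the plain squared Euclidean distance, which is a quick sanity check on the identity.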


EEG-based Emotion Recognition Using Multiple Kernel Learning - Machine Intelligence Research

#artificialintelligence

Emotion recognition based on electroencephalography (EEG) has a wide range of applications and great potential value, so it has received increasing attention from academia and industry in recent years. Meanwhile, multiple kernel learning (MKL) has been favored by researchers for its data-driven convenience and high accuracy. However, there is little research on MKL in EEG-based emotion recognition. This paper therefore explores and promotes the application of MKL methods to EEG-based emotion recognition. To that end, we propose a support vector machine (SVM) classifier based on the MKL algorithm EasyMKL to investigate the feasibility of MKL algorithms for EEG-based emotion recognition.
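At its core, an EasyMKL-style pipeline combines precomputed base kernel matrices into a convex mixture before the SVM sees them. The sketch below shows only that combination step, with placeholder weights; in the paper the weights are learned by EasyMKL, and the function name is an assumption.

```python
import numpy as np

def combine_kernels(kernel_mats, weights):
    """Convex mixture of base kernel matrices, K = sum_m w_m * K_m.
    Nonnegative weights summing to 1 keep the result a valid
    (positive semidefinite) kernel; the weights here are placeholders,
    where EasyMKL would learn them from data."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return sum(wi * Km for wi, Km in zip(w, kernel_mats))
```

The combined matrix can then be passed to any SVM implementation that accepts a precomputed kernel.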


Constrained Clustering and Multiple Kernel Learning without Pairwise Constraint Relaxation

Boecking, Benedikt, Jeanselme, Vincent, Dubrawski, Artur

arXiv.org Machine Learning

Clustering under pairwise constraints is an important knowledge discovery tool that enables the learning of appropriate kernels or distance metrics to improve clustering performance. These pairwise constraints, which come in the form of must-link and cannot-link pairs, arise naturally in many applications and are intuitive for users to provide. However, the common practice of relaxing discrete constraints to a continuous domain to ease optimization when learning kernels or metrics can harm generalization, as information that only encodes linkage is transformed into information about distances. We introduce a new constrained clustering algorithm that jointly clusters data and learns a kernel in accordance with the available pairwise constraints. To generalize well, our method is designed to maximize constraint satisfaction without relaxing pairwise constraints to a continuous domain where they inform distances. We show that the proposed method outperforms existing approaches on a large number of diverse publicly available datasets, and we discuss how our method can scale to large datasets.
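The discrete notion of constraint satisfaction the abstract emphasizes can be sketched as follows: count how many must-link pairs land in the same cluster and how many cannot-link pairs land in different clusters, with no relaxation to distances. This is a generic evaluation sketch, not the paper's algorithm, and the function name is made up.

```python
def constraint_satisfaction(labels, must_link, cannot_link):
    """Fraction of pairwise constraints a clustering satisfies, keeping
    the constraints discrete: a must-link pair (i, j) is satisfied when
    labels[i] == labels[j], a cannot-link pair when they differ."""
    hits = sum(labels[i] == labels[j] for i, j in must_link)
    hits += sum(labels[i] != labels[j] for i, j in cannot_link)
    total = len(must_link) + len(cannot_link)
    return hits / total if total else 1.0
```

Maximizing this quantity directly, rather than a distance-based surrogate, is the design choice the abstract argues for.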